An Improved Machine Learning-Based Employees Attrition Prediction Framework with Emphasis on Feature Selection

Authors

Abstract

Companies always seek ways to make their professional employees stay with them and thereby reduce extra recruiting and training costs. Predicting whether a particular employee is likely to leave helps the company make preventive decisions. Unlike physical systems, human resource problems cannot be described by scientific-analytical formulas; therefore, machine learning approaches are the best tools for this aim. This paper presents a three-stage (pre-processing, processing, post-processing) framework for attrition prediction. An IBM HR dataset is chosen as the case study. Since there are several features in the dataset, a “max-out” feature selection method is proposed for dimension reduction in the pre-processing stage and implemented on the dataset. The coefficient of each feature in a logistic regression model shows its importance, and the results show an improvement in the F1-score performance measure due to the proposed method. Finally, the validity of the model parameters is checked over multiple bootstrap datasets; the average and standard deviation of the parameters are analyzed to assess their confidence and the model’s stability. A small standard deviation indicates that the model is stable and more likely to generalize well.
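As a rough illustration of the pipeline described above, the sketch below trains a logistic regression on the IBM HR attrition data, ranks features by the absolute value of their coefficients, reports the F1-score, and refits the model on bootstrap resamples to inspect the mean and standard deviation of the coefficients. The abstract does not spell out the “max-out” rule, so coefficient-magnitude ranking stands in for it here; the CSV filename, the train/test split, and the number of resamples are assumptions, not the paper's settings.

```python
# Minimal sketch (not the paper's implementation): logistic regression on the
# IBM HR attrition data, coefficient-based feature importance, F1 evaluation,
# and a bootstrap check of coefficient stability.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

# Assumed path to the public IBM HR attrition CSV.
df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")
y = (df["Attrition"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["Attrition"]), drop_first=True)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
model = LogisticRegression(max_iter=1000)
model.fit(scaler.transform(X_train), y_train)

# Coefficient magnitude as a proxy for feature importance.
importance = pd.Series(np.abs(model.coef_[0]), index=X.columns)
print(importance.sort_values(ascending=False).head(10))
print("F1:", f1_score(y_test, model.predict(scaler.transform(X_test))))

# Post-processing stage: refit on bootstrap resamples and inspect the mean and
# standard deviation of each coefficient; small deviations suggest stability.
boot_coefs = []
for seed in range(100):  # 100 resamples is an assumption
    Xb, yb = resample(X_train, y_train, random_state=seed)
    sb = StandardScaler().fit(Xb)
    mb = LogisticRegression(max_iter=1000).fit(sb.transform(Xb), yb)
    boot_coefs.append(mb.coef_[0])
boot_coefs = np.asarray(boot_coefs)
stability = pd.DataFrame(
    {"mean": boot_coefs.mean(axis=0), "std": boot_coefs.std(axis=0)},
    index=X.columns)
print(stability.sort_values("std").head(10))
```

In this setting, features whose coefficients keep a large mean and a small bootstrap standard deviation are the ones a post-processing check of this kind would treat as reliable.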

Similar resources

Correlation-based Feature Selection for Machine Learning

A central problem in machine learning is identifying a representative set of features from which to construct a classification model for a particular task. This thesis addresses the problem of feature selection for machine learning through a correlation based approach. The central hypothesis is that good feature sets contain features that are highly correlated with the class, yet uncorrelated w...
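A common way to formalize that hypothesis is a merit score that grows with the average feature-class correlation and shrinks with the average feature-feature correlation. The sketch below is a minimal illustration under stated assumptions: Pearson correlation stands in for the thesis's actual correlation measure, numeric features are assumed, and greedy_cfs is a hypothetical helper showing one possible forward-search strategy.

```python
# Sketch of a correlation-based merit score for a feature subset:
# reward feature-class correlation, penalize feature-feature correlation.
import numpy as np
import pandas as pd

def cfs_merit(X: pd.DataFrame, y: pd.Series, subset: list) -> float:
    """Merit of a subset of k features: k * r_cf / sqrt(k + k*(k-1)*r_ff)."""
    k = len(subset)
    if k == 0:
        return 0.0
    # Average absolute correlation between each feature and the class.
    r_cf = np.mean([abs(X[f].corr(y)) for f in subset])
    # Average absolute pairwise correlation among the chosen features.
    if k == 1:
        r_ff = 0.0
    else:
        r_ff = np.mean([abs(X[a].corr(X[b]))
                        for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def greedy_cfs(X: pd.DataFrame, y: pd.Series, max_features: int = 5) -> list:
    """Forward selection: keep adding the feature that raises the merit most."""
    selected, remaining = [], list(X.columns)
    while remaining and len(selected) < max_features:
        best_score, best_f = max((cfs_merit(X, y, selected + [f]), f)
                                 for f in remaining)
        if selected and best_score <= cfs_merit(X, y, selected):
            break  # no candidate improves the current subset
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```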

An Improved K-Nearest Neighbor with Crow Search Algorithm for Feature Selection in Text Documents Classification

The Internet provides easy access to a kind of library resources. However, classification of documents from a large amount of data is still an issue and demands time and energy to find certain documents. Classification of similar documents in specific classes of data can reduce the time for searching the required data, particularly text documents. This is further facilitated by using Artificial...

An Improved Flower Pollination Algorithm with AdaBoost Algorithm for Feature Selection in Text Documents Classification

In recent years, production of text documents has seen an exponential growth, which is the reason why their proper classification seems necessary for better access. One of the main problems of classifying text documents is working in high-dimensional feature space. Feature Selection (FS) is one of the ways to reduce the number of text attributes. So, working with a great bulk of the feature spa...

Journal

Journal title: Mathematics

Year: 2021

ISSN: 2227-7390

DOI: https://doi.org/10.3390/math9111226